You are at the wrong place at the wrong time. A camera reads your license plate. Another scans your face. A computer cross-references your location against a pattern it has flagged as suspicious. A facial recognition algorithm returns your photo as a possible match to a suspect. A detective, trusting the machine, applies for a warrant. Six officers show up at your door.
This is not science fiction. This is the documented reality of how AI-driven surveillance is being deployed by American law enforcement in 2026 — and three private companies have quietly built the infrastructure making it possible, selling their tools to police departments, federal agencies, and intelligence units across the country with minimal oversight, no federal regulation, and no requirement that any of it be disclosed to the courts, to defendants, or to the public.
The Companies Building The Surveillance State
Three vendors form the backbone of what civil liberties researchers are calling an emerging AI surveillance pipeline in American law enforcement.
Flock Safety — Founded in Atlanta in 2017, Flock Safety manufactures automated license plate reader cameras now deployed in thousands of communities across the United States. Its Falcon cameras photograph every vehicle that passes — license plate, make, model, and color — around the clock, without requiring that the driver be suspected of anything. The data is stored in the cloud and can be searched retroactively, giving law enforcement the ability to reconstruct a person’s movements through public spaces days or weeks after the fact, with no warrant and no probable cause required to access the records.
Clearview AI — A New York-based company that built its business by scraping more than 30 billion photographs from social media platforms, news websites, mugshot databases, and other internet sources — without the knowledge or consent of the people pictured. Law enforcement agencies upload a photograph — from a surveillance camera, a crime scene, or a social media post — and the software searches its database and returns potential matches along with identifying information and the websites where those images were found. Clearview AI’s own terms of service acknowledge that results are indicative rather than definitive and should serve only as investigative leads. That disclaimer has not stopped law enforcement from treating them as identifications. Clearview has logged over one million searches by U.S. law enforcement agencies. ICE is among its largest customers.
Palantir Technologies — Founded in 2003 with early funding from the CIA’s venture arm, Palantir builds the software that ties everything together. Its flagship law enforcement platform, Gotham, ingests data from disparate sources — DMV records, arrest histories, social media posts, license plate logs, facial recognition results, financial records, and immigration databases — and builds what analysts call a pattern-of-life profile of individuals and locations. The software enables law enforcement and government analysts to connect vast, disparate datasets, build intelligence profiles, and search for individuals based on characteristics as granular as a tattoo or an immigration status, transforming historically static records into a fluid web of intelligence and surveillance. ICE alone has spent more than $200 million on Palantir contracts. In February 2026, the Department of Homeland Security signed a blanket purchase agreement worth up to $1 billion giving every DHS component access to Palantir tools through 2031.
Rounding out the ecosystem is SoundThinking — formerly known as ShotSpotter — which manufactures acoustic gunshot detection sensors deployed in urban neighborhoods across the country. The company also absorbed the predictive policing software formerly sold under the name PredPol, which used historical arrest data to generate crime hotspot predictions. In Chicago, city records show officers cited the frequency of ShotSpotter alerts in an area as justification for investigative stops and frisks of people they found there, even when they were not responding to any specific alert. The mere existence of an automated system, in other words, gave police cause to act against people who had done nothing wrong.
How The Pipeline Works
Put these tools together and the result is a surveillance infrastructure that civil liberties organizations say fundamentally changes the relationship between law enforcement and the public.
A neighborhood is flagged as a high-activity zone based on historical arrest data and SoundThinking alerts. Flock Safety cameras log every vehicle that passes through the area, building a searchable record of movement. Palantir Gotham maps the social networks of known associates using arrest records, license plate data, and scraped social media posts. The system generates what analysts call a pattern-of-life assessment — a predictive profile of who is likely to be at a location, and when.
An innocent person drives through that neighborhood. Their license plate is logged. Their face, captured by a nearby camera, is run through Clearview AI. The algorithm returns a match, not because they are guilty, but because the technology has documented error rates that reach 34.7% for darker-skinned women. A detective, trained to trust the output, builds a warrant application around the result. The judge, who is never told how the identification was made, signs it.
This is not a hypothetical. It has already happened. Multiple times. With names attached.
The People It Happened To
Robert Williams — Detroit, Michigan, January 2020
Robert Williams was at work when he received a call from Detroit police telling him to turn himself in. When he pulled into his driveway, officers pulled up behind him and handcuffed him in front of his wife and two young daughters — then ages 2 and 5. He had been identified as a shoplifting suspect through a facial recognition match against an expired driver’s license photo. The only evidence linking him to the crime was the AI result. He was held for 30 hours before officers realized they had the wrong man. Williams became the first documented case of wrongful arrest caused by facial recognition technology in United States history.
Porcha Woodruff — Detroit, Michigan, February 2023
Porcha Woodruff was home around 8 a.m. helping her 6- and 12-year-olds get ready for school when six Detroit police officers arrived at her door with an arrest warrant for carjacking and robbery. Eight months pregnant, she pointed to her stomach and asked officers, “Are you kidding, carjacking?” She was handcuffed anyway. She spent 11 hours in jail, began having contractions from stress, and was released on a $100,000 bond. Prosecutors dropped the case a month later. The actual perpetrator of the crime had not been visibly pregnant. Police had used an eight-year-old mugshot photo of Woodruff in the facial recognition search despite having access to her current driver’s license. Woodruff became the first woman known to be wrongfully arrested due to facial recognition — and at the time, the sixth person overall. Every one of the six confirmed victims was Black.
Randal Quran Reid — arrested in Georgia on a Louisiana warrant, late 2022
Police relied solely on a Clearview AI facial recognition result as purported probable cause, despite having signed a service agreement with Clearview acknowledging that results are indicative and not definitive and that officers must conduct further research before acting on them. The detective wrote in the warrant affidavit only that he was advised by a credible source that Reid was the suspect — concealing the role of facial recognition entirely. A judge signed the warrant, unaware of how the identification had been made. Reid had never even been to Louisiana. He spent nearly a week in jail while his parents spent thousands of dollars on legal fees. The warrant was recalled after his attorney presented photos and videos proving he was not the person in the surveillance footage.
Alonzo Sawyer — Maryland, 2022
Sawyer, 58, was arrested by Maryland authorities on suspicion of assaulting a bus driver after a facial recognition match. He says he only got out of jail because his wife drove 90 miles to personally confront the probation agent who, pressured by police, had falsely confirmed her husband as the attacker. Confronted, the agent recanted the identification. “I knew I was innocent,” Sawyer said. “So how do I beat a machine?”
Nijeer Parks — New Jersey, 2019
Parks was arrested and jailed for 10 days for a crime he did not commit, identified solely through a facial recognition match.
The Bias Is Not A Bug — It Is Documented
The pattern running through every one of these cases is not coincidence. It reflects a fundamental flaw in the technology itself that federal researchers have confirmed and vendors have failed to resolve.
A 2018 MIT Media Lab study, known as Gender Shades, found commercial facial recognition systems produced error rates of just 0.8% for light-skinned men but 34.7% for darker-skinned women, a disparity of more than 40-fold. A 2019 study by the National Institute of Standards and Technology tested 189 facial recognition algorithms from 99 developers and found that African American and Asian faces were 10 to 100 times more likely to be misidentified than white male faces.
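Those error rates can be made concrete with a little base-rate arithmetic. The sketch below is illustrative only: the per-comparison false-match rate and the gallery size are round-number assumptions, not figures from any vendor or study. The point is that even a seemingly tiny error rate, multiplied across a database of millions of faces, produces false candidates on essentially every search.

```python
def expected_false_matches(gallery_size: int, false_match_rate: float) -> float:
    """Expected number of innocent people returned as candidates for one probe photo.

    Each face in the gallery is an independent chance for a false match,
    so the expected count is simply gallery_size * false_match_rate.
    """
    return gallery_size * false_match_rate

# Hypothetical figures: a one-in-a-million false-match rate searched
# against a 30-million-face gallery (assumptions, not vendor specs).
hits = expected_false_matches(30_000_000, 1e-6)
print(round(hits))  # roughly 30 innocent candidates per search, on average
```

And when the true perpetrator is not in the gallery at all — as in the Reid case, where the suspect had never set foot in the state — every candidate the system returns is, by definition, an innocent person.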
In the seven known cases of wrongful arrest following facial recognition matches, police failed to conduct sufficient follow-up investigation in every single case — follow-up that could have prevented the arrest entirely. In at least five of those seven cases, police had been explicitly warned by the technology’s own vendor that a facial recognition result does not constitute a positive identification — and made the arrest anyway.
The consequences for those wrongfully arrested extend far beyond the immediate arrest. All eight people known to have been wrongly arrested described permanent scars: lost jobs, damaged relationships, missed mortgage and car payments. Some had to send their children to counseling to work through the trauma of watching a parent arrested on the front lawn. Most described a lasting fear of police that did not go away after charges were dropped.
The Legal Vacuum
Despite the documented trail of wrongful arrests, false identifications, and constitutional violations, the legal framework governing these technologies in the United States remains almost entirely absent.
At the start of 2025, only 15 states — Washington, Oregon, Montana, Utah, Colorado, Minnesota, Illinois, Alabama, Virginia, Maryland, New Jersey, Massachusetts, New Hampshire, Vermont, and Maine — had passed any legislation governing the use of facial recognition in policing. Thirty-five states have enacted nothing.
There is no federal law requiring law enforcement to disclose to a judge when facial recognition was used to generate probable cause for a warrant. There is no federal registry tracking how often these tools are deployed, against whom, or with what results. There is no federal standard governing how long license plate data can be retained or who can access it.
In Europe, Article 5 of the EU Artificial Intelligence Act, which took effect in February 2025, prohibits the marketing or use of AI systems designed to predict the probability that someone will commit a crime. The United States Congress has passed no equivalent legislation.
The predictive policing market — the industry built on selling these tools to law enforcement — was valued at $5.77 billion in 2025 and is projected to grow to $8.68 billion in 2026, with further expansion to $43.57 billion projected by 2030. The investment is accelerating. The oversight is not.
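The growth rates implied by those figures can be checked with two lines of arithmetic. The dollar values below are the ones cited above; the compound-annual-growth formula and the rounding are standard math, not taken from the market report.

```python
# Market sizes cited above, in billions of USD.
v_2025, v_2026, v_2030 = 5.77, 8.68, 43.57

one_year_growth = v_2026 / v_2025 - 1            # 2025 -> 2026
implied_cagr = (v_2030 / v_2026) ** (1 / 4) - 1  # 2026 -> 2030, four years

print(f"{one_year_growth:.0%}")  # prints 50%
print(f"{implied_cagr:.0%}")     # prints 50%
```

Both figures work out to roughly 50% annual growth, sustained through the end of the decade — what "the investment is accelerating" looks like in numbers.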
What Comes Next
The technology is advancing faster than the law. Real-time crime centers — command facilities that pull together feeds from body cameras, license plate readers, facial recognition systems, and acoustic sensors into a single interface — are already operating in more than 250 cities and counties across the country. Axon, best known for manufacturing police Tasers and body cameras, acquired a real-time crime center platform called Fusus, dramatically expanding its surveillance footprint.
Google’s vision-language AI model, updated in late 2024, is expected to dramatically improve law enforcement’s ability to automatically analyze surveillance footage at scale. The result is a system that does not wait for a crime to happen before it begins building a case.
For the innocent person in the wrong place at the wrong time, the question Alonzo Sawyer asked after his wrongful arrest remains unanswered by lawmakers, by courts, and by the companies selling these tools to police departments across America.
“I knew I was innocent,” he said. “So how do I beat a machine?”
The Fallout Files will continue to investigate the use of AI surveillance technology in American law enforcement. If you have information about the use of facial recognition, predictive policing, or automated license plate readers in your community, contact our newsroom.
Sources
- The Conversation — When the Government Can See Everything: How One Company Is Mapping the Nation’s Data (January 2026): theconversation.com/when-the-government-can-see-everything-how-one-company-palantir-is-mapping-the-nations-data-263178
- The Washington Post — Arrested By AI: Police Ignore Standards After Facial Recognition Matches (January 2025): washingtonpost.com/business/interactive/2025/police-artificial-intelligence-facial-recognition
- The Washington Post — Police Seldom Disclose Use of Facial Recognition Despite False Arrests (October 2024): washingtonpost.com/business/2024/10/06/police-facial-recognition-secret-false-arrest
- Innocence Project — Artificial Intelligence Is Putting Innocent People at Risk of Being Incarcerated (January 2025): innocenceproject.org/news/artificial-intelligence-is-putting-innocent-people-at-risk-of-being-incarcerated
- Stateline — Facial Recognition In Policing Is Getting State-By-State Guardrails (February 2025): stateline.org/2025/02/04/facial-recognition-in-policing-is-getting-state-by-state-guardrails
- ACLU — Comment to U.S. Commission on Civil Rights Re: Facial Recognition Technology (April 2024): aclu.org/wp-content/uploads/2024/04/ACLU-Comment-to-USCCR-re-FRT-4.8.2024.pdf
- Michigan Law Quadrangle — Flawed Facial Recognition Technology Leads to Wrongful Arrest and Historic Settlement (Winter 2024-2025): quadrangle.michigan.law.umich.edu/issues/winter-2024-2025/flawed-facial-recognition-technology-leads-wrongful-arrest-and-historic
- CNN — Black Mom Sues City of Detroit After False Arrest While 8 Months Pregnant (August 2023): cnn.com/2023/08/07/us/detroit-facial-recognition-technology-false-arrest-lawsuit
- TheStreet — When AI Meets Law Enforcement: The Future of Predictive Policing (February 2026): thestreet.com/technology/when-ai-meets-law-enforcement-the-future-of-predictive-policing
- Minnesota Journal of Law & Inequality — Fighting Pre-Crime: Law Enforcement, AI, and Predictive Policing Technology (January 2026): lawandinequality.org/2026/01/27/fighting-pre-crime-law-enforcement-artificial-intelligence-and-predictive-policing-technology
- Columbia Law Review — Police Technology Experiments (February 2025): columbialawreview.org/content/police-technology-experiments
- Centre for International Governance Innovation — The Promises and Perils of Predictive Policing (May 2025): cigionline.org/articles/the-promises-and-perils-of-predictive-policing
- Research and Markets — AI in Predictive Policing Market Report 2026: researchandmarkets.com/reports/6226018/ai-in-predictive-policing-market-report
- NAACP — Artificial Intelligence in Predictive Policing Issue Brief (February 2024): naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief